Probabilistic data flow analysis: a linear equational approach
Speculative optimisation relies on the estimation of the probabilities that
certain properties of the control flow are fulfilled. Concrete or estimated
branch probabilities can be used for searching and constructing advantageous
speculative and bookkeeping transformations.
We present a probabilistic extension of the classical equational approach to
data-flow analysis that can be used for this purpose. More precisely, we show
how the probabilistic information introduced in a control flow graph by branch
prediction can be used to extract a system of linear equations from a program
and present a method for calculating correct (numerical) solutions.
Comment: In Proceedings GandALF 2013, arXiv:1307.416
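The linear-equation view described above can be illustrated with a toy sketch. This is a hypothetical example, not the paper's implementation: a tiny control flow graph with estimated branch probabilities, where the probability that a data-flow fact holds at each node is a linear equation over its predecessors, solved by fixed-point iteration.

```python
# A minimal sketch of probabilistic data-flow analysis (toy CFG, not the
# paper's method). The fact tracked, "x is a known constant", and the
# branch probabilities are illustrative assumptions.

# CFG edges: (src, dst, estimated branch probability)
edges = [
    ("entry", "then", 0.7),   # branch taken with estimated probability 0.7
    ("entry", "else", 0.3),
    ("then", "merge", 1.0),
    ("else", "merge", 1.0),
]
kills = {"else"}              # the else-branch overwrites x, killing the fact

def solve(edges, kills, sweeps=100):
    nodes = {n for e in edges for n in e[:2]}
    prob = {n: 0.0 for n in nodes}
    prob["entry"] = 1.0       # the fact holds on entry
    for _ in range(sweeps):
        for n in nodes - {"entry"}:
            # linear equation: p(n) = sum over predecessors of
            # p(edge) * p(pred), zeroed if n kills the fact
            incoming = sum(p * prob[s] for s, d, p in edges if d == n)
            prob[n] = 0.0 if n in kills else incoming
    return prob

probs = solve(edges, kills)
print(round(probs["merge"], 3))   # the fact survives only via the 0.7 branch
```

At the merge point the fact holds with probability 0.7, exactly the weight of the one path along which it survives.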
Estimating the Maximum Information Leakage
none2noopenAldini, Alessandro; DI PIERRO, A.Aldini, Alessandro; DI PIERRO, A
A new tool for the performance analysis of massively parallel computer systems
We present a new tool, GPA, that can generate key performance measures for
very large systems. Based on solving systems of ordinary differential equations
(ODEs), this method of performance analysis is far more scalable than
stochastic simulation. The GPA tool is the first to produce higher moment
analysis from differential equation approximation, which is essential, in many
cases, to obtain an accurate performance prediction. We identify so-called
switch points as the source of error in the ODE approximation. We investigate
the switch-point behaviour in several large models and observe that, as the
scale of the model increases, the accuracy of the ODE performance prediction
generally improves. In the case of the variance measure, we are able to
justify theoretically that, in the limit of model scale, the ODE approximation
can be expected to tend to the actual variance of the model.
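The fluid/ODE idea can be sketched with a minimal example. This is not the GPA tool itself: the two-state client model (clients toggling between thinking and being served) and its rates are illustrative assumptions, integrated with plain Euler steps.

```python
# A minimal sketch (not GPA) of ODE-based performance analysis: instead
# of stochastically simulating N clients, we integrate mean-field ODEs
# for the expected population counts. Model and rates are hypothetical.

think_rate, serve_rate = 1.0, 2.0   # rates of leaving "think" / "serve"
N = 1000                            # population size; ODE accuracy grows with N

def integrate(t_end=20.0, dt=0.001):
    think, serve = float(N), 0.0    # all clients start in the "think" state
    for _ in range(int(t_end / dt)):
        flow = think_rate * think - serve_rate * serve  # net think -> serve flow
        think -= flow * dt          # Euler step; dt is well inside stability
        serve += flow * dt
    return think, serve

think, serve = integrate()
# The steady state matches the analytic balance equations:
# think* = N * serve_rate / (think_rate + serve_rate) = 2000/3
print(round(think, 1), round(serve, 1))
```

Solving two ODEs here replaces averaging over many stochastic runs, which is what makes the approach scale to very large populations.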
Measuring the confinement of probabilistic systems
Abstract In this paper we lay the semantic basis for a quantitative security analysis of probabilistic systems by introducing notions of approximate confinement based on various process equivalences. We re-cast the operational semantics classically expressed via probabilistic transition systems (PTS) in terms of linear operators and we present a technique for defining approximate semantics as probabilistic abstract interpretations of the PTS semantics. An operator norm is then used to quantify this approximation. This provides a quantitative measure ε of the indistinguishability of two processes and therefore of their confinement. In this security setting a statistical interpretation is then given of the quantity ε which relates it to the number of tests needed to breach the security of the system.
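The operator-norm measure can be sketched concretely. The two processes below and the choice of induced norm are illustrative assumptions, not the paper's calculus: each process is a stochastic matrix over its state space, and ε is a norm of their difference.

```python
# A minimal sketch of quantifying confinement: two probabilistic
# processes as linear operators (stochastic matrices), with epsilon
# given by an operator norm of their difference. Hypothetical processes.

A = [[0.5, 0.5],    # transition matrix of process A
     [0.0, 1.0]]
B = [[0.6, 0.4],    # a slightly different process B
     [0.0, 1.0]]

def op_norm_inf(M):
    """Operator norm induced by the sup norm: maximum absolute row sum."""
    return max(sum(abs(x) for x in row) for row in M)

def confinement_eps(A, B):
    diff = [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
    return op_norm_inf(diff)

eps = confinement_eps(A, B)
# A small eps means the processes are nearly indistinguishable; the paper
# relates eps statistically to the number of tests needed to tell them apart.
print(round(eps, 3))
```

Here ε = 0.2: the processes differ only in the first state's branching, and the norm picks out that worst-case row.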
Probabilistic Constraint Handling Rules
Abstract Classical Constraint Handling Rules (CHR) provide a powerful tool for specifying and implementing constraint solvers and programs. The rules of CHR rewrite constraints (non-deterministically) into simpler ones until they are solved. In this paper we introduce an extension of CHR, namely Probabilistic CHR (PCHR), which allows the probabilistic "weighting" of rules, specifying the probability of their application. In this way we are able to formalise various randomised algorithms such as Simulated Annealing. The implementation is based on source-to-source transformation (STS). Using a recently developed prototype for STS for CHR, we could implement probabilistic CHR concisely, in a few lines of code and in less than one hour.
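The core PCHR idea, choosing among applicable rules with probability proportional to their weights, can be sketched as follows. The toy rules and constraint store are hypothetical, and this is not the CHR source-to-source implementation.

```python
# A minimal sketch of probabilistic rule application in the PCHR style:
# when several rules are applicable, one is drawn with probability
# proportional to its weight. Rules and store are illustrative.
import random

# Each rule: (weight, guard, rewrite) over a set of constraints
rules = [
    (3.0, lambda s: "a" in s, lambda s: (s - {"a"}) | {"b"}),  # a <=> b, weight 3
    (1.0, lambda s: "a" in s, lambda s: s - {"a"}),            # a <=> true, weight 1
]

def step(store, rng):
    applicable = [(w, rw) for w, g, rw in rules if g(store)]
    if not applicable:
        return store, False          # no rule fires: store is solved
    _, rewrite = rng.choices(applicable,
                             weights=[w for w, _ in applicable])[0]
    return rewrite(store), True

rng = random.Random(0)
# With weights 3:1, "a" should rewrite to "b" about 3/4 of the time
hits = sum("b" in step(frozenset({"a"}), rng)[0] for _ in range(1000))
print(round(hits / 1000, 2))
```

The weighted draw is what lets randomised algorithms such as Simulated Annealing (accept a worse move with some probability) be expressed directly as weighted rules.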
Structure Learning of Quantum Embeddings
The representation of data is of paramount importance for machine learning
methods. Kernel methods are used to enrich the feature representation, allowing
better generalization. Quantum kernels efficiently implement complex
transformations that encode classical data in the Hilbert space of a quantum
system, potentially resulting in exponential speedups. However, we need prior knowledge
of the data to choose an appropriate parametric quantum circuit (PQC) that can be
used as quantum embedding. We propose an algorithm that automatically selects
the best quantum embedding through a combinatorial optimization procedure that
modifies the structure of the circuit, changing the generators of the gates,
their angles (which depend on the data points), and the qubits on which the
various gates act. Since combinatorial optimization is computationally
expensive, we have introduced a criterion based on the exponential
concentration of kernel matrix coefficients around the mean to immediately
discard an arbitrarily large portion of solutions that are believed to perform
poorly. Unlike gradient-based optimization (e.g. trainable quantum
kernels), our approach is, by construction, not affected by the barren plateau problem.
We have used both artificial and real-world datasets to demonstrate the
improved performance of our approach with respect to randomly generated PQCs.
We have also compared the effect of different optimization algorithms,
including greedy local search, simulated annealing, and genetic algorithms,
showing that the choice of algorithm strongly affects the result.
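The concentration-based filter can be sketched without any quantum hardware. The kernel matrices below are made-up numbers, not the paper's circuits: a candidate embedding whose kernel entries concentrate tightly around their mean is rejected before any expensive optimisation step.

```python
# A minimal sketch (hypothetical kernels) of the exponential-concentration
# criterion: candidates whose off-diagonal kernel entries have collapsed
# around the mean are discarded immediately as likely poor performers.

def off_diag_variance(K):
    vals = [K[i][j] for i in range(len(K)) for j in range(len(K)) if i != j]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def keep_candidate(K, threshold=1e-3):
    # low variance = concentrated kernel = believed to generalise poorly
    return off_diag_variance(K) >= threshold

# An expressive kernel vs. one whose entries have all collapsed near 0.5
K_good = [[1.0, 0.9, 0.2], [0.9, 1.0, 0.4], [0.2, 0.4, 1.0]]
K_flat = [[1.0, 0.50, 0.51], [0.50, 1.0, 0.50], [0.51, 0.50, 1.0]]
print(keep_candidate(K_good), keep_candidate(K_flat))   # prints: True False
```

Because the test needs only the kernel matrix, arbitrarily many candidate circuit structures can be pruned cheaply before the combinatorial search ever evaluates them.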
Linear Embedding for a Quantitative Comparison of Language Expressiveness
Abstract We introduce the notion of linear embedding, which refines Shapiro's notion of embedding by recasting it in a linear-space based semantics setting. We use this notion to compare the expressiveness of a class of languages that employ asynchronous communication primitives à la Linda. The adoption of a linear semantics, in which the observables of a language are linear operators (matrices) representing the programs' transition graphs, allows us to give quantitative estimates of the different expressive power of languages, thus improving previous results in the field.
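One plausible concrete reading of the linear-semantics setting can be sketched numerically. The programs, the encoding, and the choice of Frobenius distance below are all illustrative assumptions, not the paper's definitions: each program's observable is a matrix, and an embedding is judged quantitatively by how far the encoded program's matrix lies from the original.

```python
# A minimal, speculative sketch: observables as transition matrices, with
# a matrix distance quantifying how faithfully an embedding of language
# L1 into L2 preserves them. All numbers here are hypothetical.

def frobenius_dist(A, B):
    return sum((a - b) ** 2
               for ra, rb in zip(A, B)
               for a, b in zip(ra, rb)) ** 0.5

# Transition matrix of a source program and of its encoding in the
# target language (the encoding adds a small bookkeeping perturbation)
T_source  = [[0.0, 1.0], [1.0, 0.0]]
T_encoded = [[0.1, 0.9], [0.9, 0.1]]

# Distance 0 would mean observables are preserved exactly; larger values
# give a quantitative estimate of the expressiveness gap
print(round(frobenius_dist(T_source, T_encoded), 4))
```

The point of moving to matrices is exactly this: instead of a yes/no embeddability result, one gets a number measuring how much is lost.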